Overparameterized Linear Regression under Adversarial Attacks

Authors

Abstract

We study the error of linear regression in the face of adversarial attacks. In this framework, an adversary changes the input to the regression model in order to maximize the prediction error. We provide bounds on the prediction error in the presence of an adversary as a function of the parameter norm and the error in the absence of such an adversary. We show how these bounds make it possible to study the adversarial error using analysis from non-adversarial setups. The obtained results shed light on the robustness of overparameterized models to adversarial attacks: adding features might be either a source of additional robustness or brittleness. On the one hand, we use asymptotic results to illustrate how double-descent curves can be obtained for the adversarial error. On the other hand, we derive conditions under which the adversarial error can grow to infinity as more features are added, while, at the same time, the test error goes to zero. We show that this behavior is caused by the fact that the norm of the parameter vector grows with the number of features. It is also established that $\ell_\infty$ and $\ell_2$-adversarial attacks behave fundamentally differently due to how the $\ell_1$ and $\ell_2$-norms of random projections concentrate. We also show how our reformulation allows for solving the adversarial training problem as a convex optimization problem. This fact is then exploited to establish similarities between adversarial training and parameter-shrinking methods and to study how the attack affects the estimated models.
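The link between adversarial error and parameter norm is easy to check numerically. For a linear model $\hat{y} = x^\top \beta$, the worst-case $\ell_\infty$-bounded perturbation $\|\delta\|_\infty \le \epsilon$ shifts the prediction by exactly $\epsilon \|\beta\|_1$, so the per-sample adversarial error is the clean error plus that term. The sketch below is our own illustration of this bound (the synthetic data and the value of `eps` are assumptions, not the authors' experiment):

```python
import numpy as np

# Synthetic overdetermined regression problem (illustrative only)
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ordinary least-squares fit
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Worst-case l_inf attack on a linear model: for ||delta||_inf <= eps,
# the prediction (x + delta) @ beta moves by at most eps * ||beta||_1,
# attained by delta = eps * sign(beta) with the sign chosen to enlarge
# the residual. The adversarial absolute error is therefore:
eps = 0.1
clean_abs_err = np.abs(y - X @ beta_hat)
adv_abs_err = clean_abs_err + eps * np.abs(beta_hat).sum()

print(f"mean clean |error|:       {clean_abs_err.mean():.3f}")
print(f"mean adversarial |error|: {adv_abs_err.mean():.3f}")
```

Because the gap between the two errors is $\epsilon \|\hat{\beta}\|_1$, any growth of the parameter norm with the number of features translates directly into adversarial brittleness, which is the mechanism the abstract describes.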


Similar resources

Robust Sparse Regression under Adversarial Corruption

We consider high-dimensional sparse regression with arbitrary (possibly severe or coordinated) errors in the covariate matrix. We are interested in understanding how many corruptions we can tolerate while identifying the correct support. To the best of our knowledge, neither standard outlier rejection techniques, nor recently developed robust regression algorithms (that focus only on corru...


Linear Regression Under Multiple Changepoints

This dissertation studies the least squares estimator of a trend parameter in a simple linear regression model with multiple changepoints when the changepoint times are known. The error component in the model is allowed to be autocorrelated. The least squares estimator of the trend and the variance of the trend estimator are derived. Consistency and asymptotic normality of the trend estimator a...


Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce classification errors through tiny, almost imperceptible, perturbations. Vulnerability to such “adversa...


Stability in Heterogeneous Multimedia Networks under Adversarial Attacks

A distinguishing feature of today's large-scale platforms for multimedia distribution and communication, such as the Internet, is their heterogeneity, predominantly manifested by the fact that a variety of communication protocols are simultaneously running over different hosts. A fundamental question that naturally arises for such common settings of heterogeneous multimedia systems concerns the...


Robust Sparse Regression under Adversarial Corruption Supplementary Material

Recall that $y = [y_A; y_O]$ and $X = [X_A; X_O]$ with $y_A = X_A \beta^* + e$, and $\Lambda^*$ is the true support. The adversary fixes some set $\hat{\Lambda}$ disjoint from the true support $\Lambda^*$ with $|\hat{\Lambda}| = |\Lambda^*|$. It then chooses $\hat{\beta}$ and $y_O$ such that $\hat{\beta}_{\hat{\Lambda}} = \beta^*_{\Lambda^*}$, $\hat{\beta}_{\hat{\Lambda}^c} = 0$, and $y_O = X_O \hat{\beta}$, with $X_O$ to be determined later. By assumption we have $h(\hat{\beta}) = h(\beta^*) \le R$, so $\hat{\beta}$ is feasible. Its objective value is $f(y - X\hat{\beta}) = f([y_A - X_{A,\hat{\Lambda}} \beta^*_{\Lambda^*}; 0]) \le C$ fo...



Journal

Journal title: IEEE Transactions on Signal Processing

Year: 2023

ISSN: 1053-587X, 1941-0476

DOI: https://doi.org/10.1109/tsp.2023.3246228